Neuroplasticity, neural reuse, and the language module
What conception of mental architecture can survive the evidence of neuroplasticity and neural reuse in the human brain? In particular, what sorts of modules are compatible with this evidence? I aim to show how developmental and adult neuroplasticity, as well as evidence of pervasive neural reuse, forces us to revise the standard conception of modularity and spells the end of a hardwired and dedicated language module. I argue from principles of both neural reuse and neural redundancy that language is facilitated by a composite of modules (or module-like entities), few if any of which are likely to be linguistically special, and that neuroplasticity provides evidence that (in key respects and to an appreciable extent) few if any of them ought to be considered developmentally robust, though their development does seem to be constrained by features intrinsic to particular regions of cortex (manifesting as domain-specific predispositions or acquisition biases). In the course of doing so, I articulate a schematically and neurobiologically precise framework for understanding modules and their supramodular interactions.
Neural redundancy and its relation to neural reuse
Evidence of the pervasiveness of neural reuse in the human brain has forced a revision of the standard conception of modularity in the cognitive sciences. One persistent line of argument against such revision, however, draws from a large body of experimental literature attesting to the existence of cognitive dissociations. While numerous rejoinders to this argument have been offered over the years, few have grappled seriously with the phenomenon. This paper offers a fresh perspective. It takes the dissociations seriously, on the one hand, while affirming that traditional modularities of mind do not do justice to the evidence of neural reuse, on the other. The key to the puzzle is neural redundancy. The paper offers both a philosophical analysis of the relation between reuse and redundancy and a plausible solution to the problem of dissociations.
Explaining machine learning decisions
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of "Explainable AI" (XAI) has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
Algorithmic Decision-Making and the Control Problem
Abstract: The danger of human operators devolving responsibility to machines and failing to detect cases where they fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it "the control problem", understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly "better than human" in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary (and potentially dynamic) coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
Government Use of Artificial Intelligence in New Zealand
Final Report on Phase 1 of the New Zealand Law Foundation's Artificial Intelligence and Law in New Zealand Project
- …